Power log-normal distribution (powerlognorm)#
The power log-normal distribution (SciPy: scipy.stats.powerlognorm) is a continuous distribution on \((0,\infty)\). A useful way to think about it is as a lognormal distribution whose survival function is raised to a power.
Equivalently (when the power parameter is an integer), it describes the minimum of several independent lognormal variables — a natural model for “time to first failure” among multiple components.
Learning goals#
classify `powerlognorm` (support, parameters, SciPy mapping)
write down the PDF/CDF/SF and derive the quantile function
understand how the power parameter changes the distribution (order-statistics intuition)
compute moments numerically and understand which transforms exist / don’t exist
sample from `powerlognorm` using a NumPy-only inverse-transform sampler
use `scipy.stats.powerlognorm` for `pdf`, `cdf`, `rvs`, and `fit`
import numpy as np
import plotly
import plotly.express as px
import plotly.graph_objects as go
import os
import plotly.io as pio
import scipy
from scipy import integrate, optimize
from scipy import stats
from scipy.stats import norm
from scipy.stats import powerlognorm as powerlognorm_dist
pio.templates.default = "plotly_white"
pio.renderers.default = os.environ.get("PLOTLY_RENDERER", "notebook")
np.set_printoptions(precision=5, suppress=True)
rng = np.random.default_rng(42)
# Record versions for reproducibility (useful when numerical details matter).
VERSIONS = {
"numpy": np.__version__,
"scipy": scipy.__version__,
"plotly": plotly.__version__,
}
VERSIONS
{'numpy': '1.26.2', 'scipy': '1.15.0', 'plotly': '6.5.2'}
1) Title & classification#
| Item | Value |
|---|---|
| Name | `powerlognorm` (power log-normal) |
| Type | Continuous |
| Support (standard form) | \(x \in (0,\infty)\) |
| Shape parameters | \(c>0\) (power), \(s>0\) (log-scale) |
| Location/scale | `loc` (shift), `scale` (scale factor); defaults `loc=0`, `scale=1` |
SciPy uses the standard location/scale transform

\[
X = \text{loc} + \text{scale}\cdot X_{\text{std}},
\qquad
f(x) = \frac{1}{\text{scale}}\,
f_{\text{std}}\!\left(\frac{x-\text{loc}}{\text{scale}}\right),
\]

so the full support is \(x>\text{loc}\).
Unless stated otherwise, we work with the standard form loc=0, scale=1.
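As a quick sanity check of the location/scale rule, we can compare a shifted/scaled evaluation against the standard form directly (the values `loc=2.0`, `scale=3.0`, `x=5.0` below are arbitrary illustrative choices):

```python
import numpy as np
from scipy.stats import powerlognorm

c, s = 2.0, 0.5
loc, scale = 2.0, 3.0
x = 5.0

# Generic location/scale rule: f(x; loc, scale) = f_std((x - loc)/scale) / scale
lhs = powerlognorm.pdf(x, c, s, loc=loc, scale=scale)
rhs = powerlognorm.pdf((x - loc) / scale, c, s) / scale
print(np.isclose(lhs, rhs))  # True

# The support is shifted: there is no mass at or below loc
print(powerlognorm.cdf(loc, c, s, loc=loc, scale=scale))  # 0.0
```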
2) Intuition & motivation#
What it models#
A clean way to see the “power” is through the survival function (tail probability).

Let \(Y\) be a lognormal variable. Define a new variable \(X\) whose survival is a powered lognormal survival:

\[
\mathbb{P}(X > x) = \mathbb{P}(Y > x)^{c}, \qquad x > 0.
\]

If \(c\) is a positive integer, this has an immediate interpretation: if \(Y_1,\dots,Y_c\) are i.i.d. copies of \(Y\), then

\[
\mathbb{P}\big(\min(Y_1,\dots,Y_c) > x\big)
= \prod_{i=1}^{c} \mathbb{P}(Y_i > x)
= \mathbb{P}(Y > x)^{c}.
\]

So for integer \(c\), powerlognorm is the distribution of the minimum of \(c\) i.i.d. lognormals.
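The minimum interpretation is easy to verify by simulation: draw minima of i.i.d. lognormals and compare them to the corresponding powerlognorm with a Kolmogorov–Smirnov test (a sketch; the sample size, seed, and parameters below are arbitrary):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
c_int, s = 3, 0.5

# Minimum of c_int i.i.d. lognormals: exp(s * Z_i) with Z_i standard normal
y = np.exp(s * rng.standard_normal(size=(c_int, 50_000)))
x_min = y.min(axis=0)

# KS test against powerlognorm(c=3, s): should not reject
res = stats.kstest(x_min, stats.powerlognorm(c_int, s).cdf)
print(res.pvalue)
```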
Typical real-world use cases#
Reliability / survival: time to first failure among \(c\) components with (approximately) lognormal lifetimes.
Competing risks: multiple independent mechanisms can “trigger” an event; the observed time is the minimum.
Extreme-value flavored modeling: modeling the best-case / smallest outcome when each candidate outcome is lognormal-ish.
Relations to other distributions#
Lognormal: when \(c=1\), `powerlognorm` reduces exactly to a lognormal with \(\log X \sim \mathcal{N}(0, s^2)\) (in standard form).
Order statistics: for integer \(c\), it is the distribution of a minimum.
Lehmann alternatives / exponentiated families: raising a baseline CDF or survival function to a power creates a flexible one-parameter extension.
Power-normal on the log scale: if \(Z = \log X / s\), then \(Z\) has PDF \(c\,\phi(z)\,\Phi(-z)^{c-1}\) — the “power” version of a standard normal.
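This log-scale relation can be checked directly: SciPy ships the power-normal as `scipy.stats.powernorm`, with PDF \(c\,\phi(x)\,\Phi(-x)^{c-1}\), so \(Z=\log X/s\) should match `powernorm(c)` (parameter values below are arbitrary):

```python
import numpy as np
from scipy.stats import powerlognorm, powernorm

c, s = 2.5, 0.7
z = np.linspace(-3, 3, 7)

# If X ~ powerlognorm(c, s), then Z = log(X)/s satisfies
# P(Z <= z) = P(X <= exp(s z)), which should equal powernorm(c).cdf(z).
lhs = powerlognorm.cdf(np.exp(s * z), c, s)
rhs = powernorm.cdf(z, c)
print(np.allclose(lhs, rhs))  # True
```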
3) Formal definition#
Let \(\phi\) and \(\Phi\) be the standard normal PDF and CDF:

\[
\phi(z) = \frac{1}{\sqrt{2\pi}}\,e^{-z^2/2},
\qquad
\Phi(z) = \int_{-\infty}^{z}\phi(t)\,dt.
\]
PDF#
For \(x>0\) and \(c,s>0\), the powerlognorm PDF is

\[
f(x;c,s) = \frac{c}{x\,s}\,
\phi\!\left(\frac{\log x}{s}\right)
\left[\Phi\!\left(-\frac{\log x}{s}\right)\right]^{c-1}
\]

(and \(f(x)=0\) for \(x\le 0\)).
CDF / survival#
A convenient closed form is in terms of the survival function:

\[
S(x;c,s) = \mathbb{P}(X>x) =
\left[\Phi\!\left(-\frac{\log x}{s}\right)\right]^{c},
\qquad x>0.
\]

Therefore the CDF is

\[
F(x;c,s) = 1 - \left[\Phi\!\left(-\frac{\log x}{s}\right)\right]^{c}.
\]
Quantile function (PPF)#
Set \(u=F(x)\). Then \(1-u = \Phi(-\log x/s)^c\), so

\[
F^{-1}(u) = \exp\!\Big(-s\,\Phi^{-1}\big((1-u)^{1/c}\big)\Big),
\qquad u\in(0,1).
\]
This is the basis of inverse-transform sampling.
def powerlognorm_logpdf(x: np.ndarray, c: float, s: float) -> np.ndarray:
"""Log-PDF of the standard powerlognorm(c, s) (loc=0, scale=1)."""
x = np.asarray(x, dtype=float)
c = float(c)
s = float(s)
if c <= 0 or s <= 0:
raise ValueError("c and s must be > 0")
out = np.full_like(x, -np.inf, dtype=float)
mask = x > 0
if np.any(mask):
z = np.log(x[mask]) / s
out[mask] = (
np.log(c)
- np.log(x[mask])
- np.log(s)
+ norm.logpdf(z)
+ (c - 1.0) * norm.logcdf(-z)
)
return out
def powerlognorm_pdf(x: np.ndarray, c: float, s: float) -> np.ndarray:
return np.exp(powerlognorm_logpdf(x, c, s))
def powerlognorm_logsf(x: np.ndarray, c: float, s: float) -> np.ndarray:
"""Log survival function log P(X > x) (standard form)."""
x = np.asarray(x, dtype=float)
c = float(c)
s = float(s)
if c <= 0 or s <= 0:
raise ValueError("c and s must be > 0")
out = np.full_like(x, -np.inf, dtype=float)
mask = x > 0
if np.any(mask):
z = np.log(x[mask]) / s
out[mask] = c * norm.logcdf(-z)
# For x <= 0, P(X > x) = 1, so logsf = 0.
out[~mask] = 0.0
return out
def powerlognorm_sf(x: np.ndarray, c: float, s: float) -> np.ndarray:
return np.exp(powerlognorm_logsf(x, c, s))
def powerlognorm_cdf(x: np.ndarray, c: float, s: float) -> np.ndarray:
"""CDF computed stably from logsf: F = 1 - exp(logsf)."""
return -np.expm1(powerlognorm_logsf(x, c, s))
def powerlognorm_ppf(u: np.ndarray, c: float, s: float) -> np.ndarray:
"""Quantile function F^{-1}(u) for u in (0,1)."""
u = np.asarray(u, dtype=float)
c = float(c)
s = float(s)
if c <= 0 or s <= 0:
raise ValueError("c and s must be > 0")
if np.any((u <= 0) | (u >= 1)):
raise ValueError("u must be in (0,1)")
q = (1.0 - u) ** (1.0 / c)
return np.exp(-s * norm.ppf(q))
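The closed-form quantile can be cross-checked against SciPy's `ppf` on a few probability levels (the levels and parameters below are arbitrary):

```python
import numpy as np
from scipy.stats import norm, powerlognorm

c, s = 2.2, 0.6
u = np.array([0.01, 0.25, 0.5, 0.75, 0.99])

# Closed-form quantile: F^{-1}(u) = exp(-s * Phi^{-1}((1 - u)^(1/c)))
x_closed = np.exp(-s * norm.ppf((1.0 - u) ** (1.0 / c)))
x_scipy = powerlognorm.ppf(u, c, s)
print(np.allclose(x_closed, x_scipy))  # True

# Round trip: F(F^{-1}(u)) = u
print(np.allclose(powerlognorm.cdf(x_closed, c, s), u))  # True
```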
4) Moments & properties#
Because powerlognorm is lognormal-like in its right tail, it has all positive moments (mean/variance/etc.), but—like the lognormal—it does not have an MGF for \(t>0\).
Log-scale representation#
Let

\[
Z = \frac{\log X}{s}.
\]

A change of variables shows that \(Z\) has PDF

\[
f_Z(z) = c\,\phi(z)\,\Phi(-z)^{c-1},
\qquad z\in\mathbb{R}.
\]

For integer \(c\), this is exactly the PDF of the minimum of \(c\) i.i.d. standard normals.
Raw moments#
For \(k>0\), using \(X=\exp(sZ)\),

\[
\mu_k' = \mathbb{E}[X^k]
= \int_{-\infty}^{\infty} e^{k s z}\, c\,\phi(z)\,\Phi(-z)^{c-1}\,dz.
\]
From raw moments:
Mean: \(\mathbb{E}[X]=\mu_1'\)
Variance: \(\mathrm{Var}(X)=\mu_2' - (\mu_1')^2\)
Skewness: \(\gamma_1 = \frac{\mu_3' - 3\mu_2'\mu_1' + 2(\mu_1')^3}{\mathrm{Var}(X)^{3/2}}\)
Kurtosis (excess): \(\gamma_2 = \frac{\mu_4' - 4\mu_3'\mu_1' + 6\mu_2'(\mu_1')^2 - 3(\mu_1')^4}{\mathrm{Var}(X)^{2}} - 3\)
Special case \(c=1\) (lognormal, standard form):

\[
\mu_k' = e^{k^2 s^2/2},
\qquad
\mathbb{E}[X] = e^{s^2/2},
\qquad
\mathrm{Var}(X) = \big(e^{s^2}-1\big)e^{s^2}.
\]
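The \(c=1\) special case is easy to confirm numerically against `scipy.stats.lognorm` and the closed-form lognormal moments (the grid and \(s\) value below are arbitrary):

```python
import numpy as np
from scipy.stats import lognorm, powerlognorm

s = 0.7
x = np.geomspace(0.1, 10, 9)

# c = 1: powerlognorm collapses to the standard lognormal with sigma = s
print(np.allclose(powerlognorm.pdf(x, 1.0, s), lognorm.pdf(x, s)))  # True

# Closed-form lognormal moments: E[X^k] = exp(k^2 s^2 / 2)
mean, var = powerlognorm.stats(1.0, s, moments="mv")
print(np.isclose(float(mean), np.exp(s**2 / 2)))
print(np.isclose(float(var), (np.exp(s**2) - 1) * np.exp(s**2)))
```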
MGF / characteristic function#
MGF \(M_X(t)=\mathbb{E}[e^{tX}]\) is infinite for every \(t>0\) (the tail is too heavy).
The characteristic function \(\varphi_X(t)=\mathbb{E}[e^{itX}]\) exists for all real \(t\), but has no simple closed form; compute numerically (quadrature) or via Monte Carlo.
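A quadrature sketch of the characteristic function, cross-checked with Monte Carlo (the parameter and frequency values are arbitrary; accuracy degrades for very large \(t\), where the integrand oscillates faster):

```python
import numpy as np
from scipy import integrate
from scipy.stats import powerlognorm

c, s, t = 2.2, 0.6, 1.5
rv = powerlognorm(c, s)

# Real and imaginary parts of E[exp(itX)] by 1D quadrature
re, _ = integrate.quad(lambda x: np.cos(t * x) * rv.pdf(x), 0, np.inf)
im, _ = integrate.quad(lambda x: np.sin(t * x) * rv.pdf(x), 0, np.inf)
cf = re + 1j * im

# Cross-check with a Monte Carlo estimate (error ~ 1/sqrt(n))
rng = np.random.default_rng(1)
xs = rv.rvs(size=200_000, random_state=rng)
cf_mc = np.exp(1j * t * xs).mean()
print(abs(cf), abs(cf - cf_mc))
```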
Entropy#
The differential entropy is

\[
h(X) = -\int_0^\infty f(x)\,\log f(x)\,dx = -\,\mathbb{E}[\log f(X)],
\]

which can be evaluated numerically (SciPy implements entropy).
Hazard function#
Because the survival is a power, the hazard scales simply:

\[
h(x;c,s) = \frac{f(x;c,s)}{S(x;c,s)}
= c\,\frac{\phi(\log x/s)}{x\,s\,\Phi(-\log x/s)}
= c\,h(x;1,s).
\]
For integer \(c\) this matches the “minimum of \(c\) i.i.d.” interpretation: the minimum’s hazard is \(c\) times the baseline hazard.
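The proportional-hazards identity can be checked numerically against the \(c=1\) baseline (the grid and parameters below are arbitrary):

```python
import numpy as np
from scipy.stats import powerlognorm

c, s = 3.0, 0.5
x = np.geomspace(0.2, 3.0, 8)

def hazard(rv, x):
    # Hazard rate h(x) = f(x) / S(x)
    return rv.pdf(x) / rv.sf(x)

h_power = hazard(powerlognorm(c, s), x)
h_base = hazard(powerlognorm(1.0, s), x)  # baseline lognormal (c = 1)
print(np.allclose(h_power, c * h_base))  # True
```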
def powerlognorm_raw_moment_quad(k: int, c: float, s: float) -> float:
"""Raw moment E[X^k] via 1D quadrature over z = log(x)/s."""
k = int(k)
c = float(c)
s = float(s)
if k <= 0:
raise ValueError("k must be a positive integer")
if c <= 0 or s <= 0:
raise ValueError("c and s must be > 0")
def integrand(z):
# log integrand for stability
log_val = (
np.log(c)
+ (k * s) * z
+ norm.logpdf(z)
+ (c - 1.0) * norm.logcdf(-z)
)
return np.exp(log_val)
val, err = integrate.quad(integrand, -np.inf, np.inf, limit=200)
return float(val)
c_demo, s_demo = 2.2, 0.6
m1 = powerlognorm_raw_moment_quad(1, c_demo, s_demo)
m2 = powerlognorm_raw_moment_quad(2, c_demo, s_demo)
mean_quad = m1
var_quad = m2 - m1**2
mean_scipy, var_scipy, skew_scipy, kurt_scipy = powerlognorm_dist.stats(
c_demo, s_demo, moments="mvsk"
)
mean_quad, var_quad, float(mean_scipy), float(var_scipy)
(0.766598898637241,
0.14357805872728147,
0.7665988986464012,
0.14357805819281622)
# Skew/kurtosis from raw moments (quadrature)
m3 = powerlognorm_raw_moment_quad(3, c_demo, s_demo)
m4 = powerlognorm_raw_moment_quad(4, c_demo, s_demo)
mu = mean_quad
var = var_quad
skew_quad = (m3 - 3 * m2 * mu + 2 * mu**3) / (var ** 1.5)
kurt_excess_quad = (m4 - 4 * m3 * mu + 6 * m2 * mu**2 - 3 * mu**4) / (var**2) - 3
float(skew_quad), float(kurt_excess_quad), float(skew_scipy), float(kurt_scipy)
(1.4125699795241162, 3.6893681890406604, 1.4125700104903471, 3.689368147505827)
# Entropy: SciPy (numerical) vs Monte Carlo estimate
n_mc = 200_000
samples_mc = powerlognorm_dist.rvs(c_demo, s_demo, size=n_mc, random_state=rng)
entropy_scipy = float(powerlognorm_dist.entropy(c_demo, s_demo))
entropy_mc = -float(np.mean(powerlognorm_logpdf(samples_mc, c_demo, s_demo)))
entropy_scipy, entropy_mc
(0.3104859755515168, 0.31121527710431507)
5) Parameter interpretation#
\(s>0\) (log-scale)#
When \(c=1\) (lognormal), \(\log X \sim \mathcal{N}(0, s^2)\), so \(s\) is literally the standard deviation on the log scale.
More generally, \(s\) still controls how spread-out \(\log X\) is.
Increasing \(s\) typically:
increases right-skew and the heaviness of the right tail
pushes more mass toward very small values (because the log scale is wider)
\(c>0\) (power / “minimum-of-\(c\)”)#
From the survival function \(\mathbb{P}(X>x) = \Phi(-\log x/s)^{c}\):
\(c=1\) gives the baseline lognormal.
\(c>1\) makes the survival smaller, so \(X\) tends to be smaller (interpretable as the minimum of more components).
\(0<c<1\) makes the survival larger, yielding a heavier tail than the lognormal baseline.
A practical interpretation (integer \(c\)): you’re observing the earliest among \(c\) independent lognormal times.
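The survival-power identity \(S(x;c,s) = S(x;1,s)^c\) behind these statements is straightforward to verify with SciPy (the grid and parameter values below are arbitrary):

```python
import numpy as np
from scipy.stats import powerlognorm

s = 0.6
x = np.geomspace(0.3, 5.0, 7)
base_sf = powerlognorm.sf(x, 1.0, s)  # lognormal baseline survival

# S(x; c, s) = S(x; 1, s)**c: c > 1 shrinks survival, 0 < c < 1 inflates it
ok = all(np.allclose(powerlognorm.sf(x, c, s), base_sf**c) for c in (0.5, 2.0, 5.0))
print(ok)  # True
```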
# Shape changes: vary c (power) and s (log-scale)
def _x_grid_for_params(param_list, lo=1e-4, hi=0.999):
# Use quantiles to choose a reasonable plotting domain.
xs = []
for c, s in param_list:
xs.append(float(powerlognorm_dist.ppf(lo, c, s)))
xs.append(float(powerlognorm_dist.ppf(hi, c, s)))
x_min = max(min(xs), 1e-12)
x_max = max(xs)
return np.geomspace(x_min, x_max, 500)
# 1) Fix s, vary c
s_fixed = 0.6
cs = [0.5, 1.0, 2.0, 5.0]
params_c = [(c, s_fixed) for c in cs]
x_grid_c = _x_grid_for_params(params_c)
fig = go.Figure()
for c in cs:
fig.add_trace(
go.Scatter(
x=x_grid_c,
y=powerlognorm_dist.pdf(x_grid_c, c, s_fixed),
mode="lines",
name=f"c={c:g}",
)
)
fig.update_layout(
title=f"powerlognorm PDF (vary c, fixed s={s_fixed:g})",
xaxis_title="x",
yaxis_title="pdf(x)",
)
fig.update_xaxes(type="log")
fig.show()
# 2) Fix c, vary s
c_fixed = 2.0
ss = [0.25, 0.5, 1.0]
params_s = [(c_fixed, s) for s in ss]
x_grid_s = _x_grid_for_params(params_s)
fig = go.Figure()
for s in ss:
fig.add_trace(
go.Scatter(
x=x_grid_s,
y=powerlognorm_dist.pdf(x_grid_s, c_fixed, s),
mode="lines",
name=f"s={s:g}",
)
)
fig.update_layout(
title=f"powerlognorm PDF (vary s, fixed c={c_fixed:g})",
xaxis_title="x",
yaxis_title="pdf(x)",
)
fig.update_xaxes(type="log")
fig.show()
6) Derivations#
Expectation (raw moments)#
Start from

\[
\mu_k' = \mathbb{E}[X^k] = \int_0^\infty x^k f(x;c,s)\,dx.
\]

Substitute \(z = \log x / s\) so \(x = e^{sz}\) and \(dx = s e^{sz}\,dz = s x\,dz\). Using the PDF,

\[
\mu_k' = \int_{-\infty}^{\infty} e^{k s z}\, c\,\phi(z)\,\Phi(-z)^{c-1}\,dz.
\]

This gives a clean 1D integral for numerical evaluation.
Variance#
Compute \(\mu_1'\) and \(\mu_2'\) and use

\[
\mathrm{Var}(X) = \mu_2' - (\mu_1')^2.
\]
Likelihood / log-likelihood#
Given i.i.d. data \(x_1,\dots,x_n>0\), the log-likelihood for \((c,s)\) in the standard form is

\[
\ell(c,s) = n\log c - n\log s - \sum_{i=1}^n \log x_i
+ \sum_{i=1}^n \log\phi(z_i)
+ (c-1)\sum_{i=1}^n \log\Phi(-z_i),
\qquad z_i = \frac{\log x_i}{s}.
\]
There’s no closed-form MLE in general; you typically optimize this numerically.
def powerlognorm_nll(params_unconstrained, x):
"""Negative log-likelihood in terms of unconstrained params (log c, log s)."""
log_c, log_s = params_unconstrained
c = np.exp(log_c)
s = np.exp(log_s)
return -np.sum(powerlognorm_logpdf(x, c, s))
# Demo: recover parameters from synthetic data (standard form)
c_true, s_true = 2.5, 0.7
x_obs = powerlognorm_dist.rvs(c_true, s_true, size=4000, random_state=rng)
x0 = np.log([1.0, 0.5])
res = optimize.minimize(powerlognorm_nll, x0=x0, args=(x_obs,), method="Nelder-Mead")
c_hat, s_hat = np.exp(res.x)
(c_true, s_true), (float(c_hat), float(s_hat))
((2.5, 0.7), (2.4902987185051226, 0.6958602222515958))
7) Sampling & simulation (NumPy-only)#
The quantile expression makes inverse-transform sampling straightforward.
Inverse-transform sampling via the survival function#
We have, for \(x>0\),

\[
\mathbb{P}(X>x) = \left[\Phi\!\left(-\frac{\log x}{s}\right)\right]^{c}.
\]

Let \(U\sim\text{Uniform}(0,1)\) and set it equal to the survival probability:

\[
U = \left[\Phi\!\left(-\frac{\log X}{s}\right)\right]^{c}.
\]

Then

\[
X = \exp\!\big(-s\,\Phi^{-1}(U^{1/c})\big).
\]
So sampling reduces to:
draw \(U\sim\text{Unif}(0,1)\)
compute \(Z=\Phi^{-1}(U^{1/c})\)
return \(X=\exp(-sZ)\)
Integer \(c\) shortcut (order-statistics)#
If \(c\in\mathbb{N}\), another NumPy-only method is:
sample \(c\) i.i.d. lognormals and take the minimum.
Both methods appear below. The only “special function” we need is an approximation of the standard normal quantile \(\Phi^{-1}\).
def norm_ppf_acklam(p: np.ndarray) -> np.ndarray:
"""Approximate standard normal quantile (Acklam's rational approximation).
Accuracy is typically ~1e-9 in the central region for float64.
Reference: Peter J. Acklam (2003), "An algorithm for computing the inverse
normal cumulative distribution function".
"""
p = np.asarray(p, dtype=float)
if np.any((p <= 0) | (p >= 1)):
raise ValueError("p must be strictly between 0 and 1")
# Coefficients in rational approximations.
a = np.array(
[
-3.969683028665376e01,
2.209460984245205e02,
-2.759285104469687e02,
1.383577518672690e02,
-3.066479806614716e01,
2.506628277459239e00,
]
)
b = np.array(
[
-5.447609879822406e01,
1.615858368580409e02,
-1.556989798598866e02,
6.680131188771972e01,
-1.328068155288572e01,
]
)
c = np.array(
[
-7.784894002430293e-03,
-3.223964580411365e-01,
-2.400758277161838e00,
-2.549732539343734e00,
4.374664141464968e00,
2.938163982698783e00,
]
)
d = np.array(
[
7.784695709041462e-03,
3.224671290700398e-01,
2.445134137142996e00,
3.754408661907416e00,
]
)
plow = 0.02425
phigh = 1.0 - plow
x = np.empty_like(p, dtype=float)
# Lower region
mask = p < plow
if np.any(mask):
q = np.sqrt(-2.0 * np.log(p[mask]))
num = (((((c[0] * q + c[1]) * q + c[2]) * q + c[3]) * q + c[4]) * q + c[5])
den = ((((d[0] * q + d[1]) * q + d[2]) * q + d[3]) * q + 1.0)
x[mask] = num / den
# Central region
mask = (p >= plow) & (p <= phigh)
if np.any(mask):
q = p[mask] - 0.5
r = q * q
num = (((((a[0] * r + a[1]) * r + a[2]) * r + a[3]) * r + a[4]) * r + a[5]) * q
den = (((((b[0] * r + b[1]) * r + b[2]) * r + b[3]) * r + b[4]) * r + 1.0)
x[mask] = num / den
# Upper region
mask = p > phigh
if np.any(mask):
q = np.sqrt(-2.0 * np.log(1.0 - p[mask]))
num = (((((c[0] * q + c[1]) * q + c[2]) * q + c[3]) * q + c[4]) * q + c[5])
den = ((((d[0] * q + d[1]) * q + d[2]) * q + d[3]) * q + 1.0)
x[mask] = -(num / den)
return x
def powerlognorm_rvs_numpy(c: float, s: float, size: int | tuple[int, ...], rng) -> np.ndarray:
"""NumPy-only sampler for the standard powerlognorm(c, s) distribution."""
c = float(c)
s = float(s)
if c <= 0 or s <= 0:
raise ValueError("c and s must be > 0")
u = rng.random(size=size)
# Avoid exact 0 which would produce -inf in log/ppf.
u = np.maximum(u, np.nextafter(0.0, 1.0))
p = np.exp(np.log(u) / c) # u**(1/c), but stable for extreme c
z = norm_ppf_acklam(p)
return np.exp(-s * z)
def powerlognorm_rvs_numpy_integer_c(c_int: int, s: float, size: int, rng) -> np.ndarray:
"""Integer-c shortcut: min of c lognormals (standard form)."""
c_int = int(c_int)
s = float(s)
if c_int <= 0 or s <= 0:
raise ValueError("c_int must be >= 1 and s must be > 0")
z = rng.normal(size=(c_int, size)).min(axis=0)
return np.exp(s * z)
# Quick correctness check (compare against SciPy moments)
c_chk, s_chk = 2.0, 0.5
n_chk = 200_000
samples_numpy = powerlognorm_rvs_numpy(c_chk, s_chk, size=n_chk, rng=rng)
m_numpy = samples_numpy.mean()
v_numpy = samples_numpy.var(ddof=0)
m_scipy, v_scipy = powerlognorm_dist.stats(c_chk, s_chk, moments="mv")
float(m_numpy), float(v_numpy), float(m_scipy), float(v_scipy)
(0.8202735128240072,
0.11734781330343225,
0.820029631513031,
0.11811345419595665)
8) Visualization#
We’ll visualize:
the PDF (and compare to a Monte Carlo histogram)
the CDF (and compare to an empirical CDF)
sample behavior under different parameters
We’ll generate samples using the NumPy-only sampler from Section 7.
def ecdf(samples: np.ndarray):
x = np.sort(np.asarray(samples))
y = np.arange(1, x.size + 1) / x.size
return x, y
c_viz, s_viz = 2.5, 0.7
n_viz = 120_000
samples = powerlognorm_rvs_numpy(c_viz, s_viz, size=n_viz, rng=rng)
x_lo = float(powerlognorm_dist.ppf(0.001, c_viz, s_viz))
x_hi = float(powerlognorm_dist.ppf(0.995, c_viz, s_viz))
x_grid = np.geomspace(max(x_lo, 1e-12), x_hi, 600)
# PDF + histogram
fig = go.Figure()
fig.add_trace(
go.Histogram(
x=samples,
nbinsx=80,
histnorm="probability density",
name="Monte Carlo samples",
opacity=0.55,
)
)
fig.add_trace(
go.Scatter(
x=x_grid,
y=powerlognorm_dist.pdf(x_grid, c_viz, s_viz),
mode="lines",
name="Theoretical PDF (SciPy)",
line=dict(width=3),
)
)
fig.update_layout(
title=f"powerlognorm(c={c_viz:g}, s={s_viz:g}): PDF vs histogram",
xaxis_title="x",
yaxis_title="density",
)
fig.update_xaxes(type="log")
fig.show()
# CDF + empirical CDF
x_ecdf, y_ecdf = ecdf(samples)
fig = go.Figure()
fig.add_trace(go.Scatter(x=x_ecdf, y=y_ecdf, mode="lines", name="Empirical CDF"))
fig.add_trace(
go.Scatter(
x=x_grid,
y=powerlognorm_dist.cdf(x_grid, c_viz, s_viz),
mode="lines",
name="Theoretical CDF (SciPy)",
line=dict(width=3),
)
)
fig.update_layout(
title=f"powerlognorm(c={c_viz:g}, s={s_viz:g}): CDF vs empirical CDF",
xaxis_title="x",
yaxis_title="F(x)",
)
fig.update_xaxes(type="log")
fig.show()
9) SciPy integration (scipy.stats.powerlognorm)#
SciPy’s implementation is scipy.stats.powerlognorm.
Shape parameters are passed as `(c, s)`; `loc` shifts and `scale` rescales as usual.
Common operations:
powerlognorm.pdf(x, c, s, loc=..., scale=...)
powerlognorm.cdf(x, c, s, loc=..., scale=...)
powerlognorm.rvs(c, s, size=..., random_state=...)
powerlognorm.fit(data, ...) for MLE fitting
Also consider the numerically stable variants logpdf, logcdf, sf, and logsf.
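A small illustration of why the log-space variants matter: deep in the right tail, the naive `1 - cdf` loses all precision, while the closed-form log survival \(c\,\log\Phi(-\log x/s)\) stays finite (the point `x_far = 30.0` is an arbitrary choice far enough out to trigger the rounding):

```python
import numpy as np
from scipy.stats import norm, powerlognorm

c, s = 2.0, 0.5
x_far = 30.0  # far right tail
z = np.log(x_far) / s

naive = 1.0 - powerlognorm.cdf(x_far, c, s)  # cdf rounds to 1.0, so this is 0.0
log_sf = c * norm.logcdf(-z)                 # stable log-space survival
print(naive)
print(log_sf, np.exp(log_sf))
```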
# Frozen distribution
rv = powerlognorm_dist(c_viz, s_viz)
# Evaluate pdf/cdf at a point
x0 = 1.0
rv.pdf(x0), rv.cdf(x0), rv.sf(x0)
(0.5037406995962112, 0.8232233047033631, 0.1767766952966369)
# Fit parameters to data (here: samples from the standard form).
# To recover (c, s) cleanly, we fix loc=0 and scale=1.
c_hat, s_hat, loc_hat, scale_hat = powerlognorm_dist.fit(samples, floc=0.0, fscale=1.0)
(c_viz, s_viz), (float(c_hat), float(s_hat)), (float(loc_hat), float(scale_hat))
((2.5, 0.7), (2.4953664703862923, 0.6993813269605423), (0.0, 1.0))
10) Statistical use cases#
Hypothesis testing (lognormal vs powerlognorm)#
Because \(c=1\) reduces to the lognormal, you can test
\(H_0: c=1\) (lognormal)
\(H_1: c\ne 1\) (powerlognorm)
via a likelihood ratio test (LRT). Under regularity conditions, the LRT statistic is asymptotically \(\chi^2\) with 1 degree of freedom.
Bayesian modeling#
A common Bayesian use is as a likelihood for strictly positive measurements where a lognormal baseline is plausible but you want extra flexibility:
The integer-\(c\) story provides a structural prior: \(c\) can represent the number of competing risks/components.
Generative modeling#
When you want synthetic first-hit / earliest-event times with lognormal-ish noise:
sample \(c\) latent lognormal times and take the minimum (integer \(c\))
or sample directly from `powerlognorm` (real-valued \(c\))
You get a tunable family that interpolates between heavier tails (\(c<1\)) and smaller, earlier minima (\(c>1\)).
# Likelihood ratio test demo: H0 c=1 (lognormal) vs H1 free c
from scipy.stats import chi2
# Simulate from a non-lognormal powerlognorm
c_alt, s_alt = 2.0, 0.6
x_test = powerlognorm_dist.rvs(c_alt, s_alt, size=6000, random_state=rng)
# Fit H1: estimate (c, s) with loc=0, scale=1
c1, s1, _, _ = powerlognorm_dist.fit(x_test, floc=0.0, fscale=1.0)
ll1 = np.sum(powerlognorm_dist.logpdf(x_test, c1, s1, loc=0.0, scale=1.0))
# Fit H0: fix c=1, estimate s with loc=0, scale=1
c0, s0, _, _ = powerlognorm_dist.fit(x_test, f0=1.0, floc=0.0, fscale=1.0)
ll0 = np.sum(powerlognorm_dist.logpdf(x_test, c0, s0, loc=0.0, scale=1.0))
lrt = 2.0 * (ll1 - ll0)
p_value = float(chi2.sf(lrt, df=1))
{
"true": {"c": c_alt, "s": s_alt},
"H1_fit": {"c": float(c1), "s": float(s1)},
"H0_fit": {"c": float(c0), "s": float(s0)},
"LRT": float(lrt),
"p_value": p_value,
}
{'true': {'c': 2.0, 's': 0.6},
'H1_fit': {'c': 1.985513381597837, 's': 0.5955967131454634},
'H0_fit': {'c': 1.0, 's': 0.5949218749999997},
'LRT': 2274.3346858406258,
'p_value': 0.0}
11) Pitfalls#
Invalid parameters: \(c\le 0\) or \(s\le 0\) is invalid; the standard-form support is \(x>0\).
Log at / below zero: formulas use \(\log x\); handle \(x\le 0\) explicitly (PDF=0, logpdf=\(-\infty\), CDF=0).
Numerical stability in the tail: \(\Phi(-\log x/s)\) can underflow for large \(x\); prefer `logcdf`/`logsf` and compute powers in log space: \(\log\,\text{sf} = c\,\log\Phi(-\log x/s)\).
Fitting can be fragile: with `loc` and `scale` free, the optimization can be poorly identified; if your data are strictly positive, it often helps to fix `loc=0`. MLE for heavy-tailed families can be sensitive to outliers; inspect diagnostics (QQ plots, tail fit).
MGF misconceptions: even though mean/variance exist, the MGF diverges for \(t>0\).
12) Summary#
`powerlognorm(c, s)` is a continuous distribution on \((0,\infty)\) with \(c>0\) and \(s>0\). It generalizes the lognormal: \(c=1\) gives a lognormal (\(\log X\sim\mathcal{N}(0,s^2)\) in standard form).
Its survival function is a powered normal tail: \(\mathbb{P}(X>x)=\Phi(-\log x/s)^c\).
For integer \(c\), it is the minimum of \(c\) i.i.d. lognormal variables (time-to-first-failure interpretation).
Mean/variance/etc. exist but typically require numerical evaluation; the MGF does not exist for \(t>0\).
Sampling is easy via inverse transform once you can compute the normal quantile; SciPy provides robust `pdf`/`cdf`/`rvs`/`fit` implementations.